Waymo has 'no plans' to sell ads to riders based on camera data
Rumors circulated today that robotaxi company Waymo might use data from vehicles' interior cameras to train AI and sell targeted ads to riders. However, the company has tried to quell concerns, insisting that it won't be targeting ads to passengers. The situation arose after researcher and engineer Jane Manchun Wong discovered an unreleased version of Waymo's privacy policy that suggested the robotaxi company could start using data from its vehicles to train generative AI. The draft policy has language allowing customers to opt out of Waymo "using your personal information (including interior camera data associated with your identity) for training GAI." Wong's discovery also suggested that Waymo could use that camera footage to sell personalized ads to riders. Later in the day, The Verge obtained comments on this unreleased privacy policy from Waymo spokesperson Julia Ilina.
- Information Technology > Services (0.61)
- Information Technology > Security & Privacy (0.41)
CamViG: Camera Aware Image-to-Video Generation with Multimodal Transformers
Marmon, Andrew, Schindler, Grant, Lezama, José, Kondratyuk, Dan, Seybold, Bryan, Essa, Irfan
We extend multimodal transformers to include 3D camera motion as a conditioning signal for the task of video generation. Generative video models are becoming increasingly powerful, which focuses research efforts on methods of controlling their output. We propose to add virtual 3D camera controls to generative video methods by conditioning generated video on an encoding of three-dimensional camera movement over the course of the generated video. Results demonstrate that we (1) successfully control the camera during video generation, starting from a single frame and a camera signal, and (2) verify the accuracy of the generated 3D camera paths using traditional computer vision methods.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
- Media > Television (0.58)
- Media > Photography (0.58)
- Media > Film (0.58)
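As a rough illustration of the kind of conditioning the abstract describes (the linear encoder and token layout here are hypothetical stand-ins, not CamViG's actual architecture), a per-frame 6-DoF camera-motion signal can be projected into the token embedding space and injected additively into the per-frame visual tokens:

```python
import numpy as np

def encode_camera_path(poses, d_model, rng):
    # poses: (T, 6) per-frame camera motion (translation + rotation deltas).
    # Project each 6-DoF pose into the token embedding space with a fixed
    # random linear map -- a stand-in for a learned projection.
    W = rng.standard_normal((6, d_model)) / np.sqrt(6)
    return poses @ W  # (T, d_model) camera-conditioning tokens

def condition_video_tokens(frame_tokens, camera_tokens):
    # Additively inject camera conditioning into per-frame visual tokens,
    # one camera token per frame, broadcast over spatial positions.
    # frame_tokens: (T, S, d); camera_tokens: (T, d)
    return frame_tokens + camera_tokens[:, None, :]

rng = np.random.default_rng(0)
T, S, d = 8, 16, 32                      # frames, spatial tokens, embed dim
poses = rng.standard_normal((T, 6))      # a synthetic camera path
frames = rng.standard_normal((T, S, d))  # synthetic frame token embeddings
cam = encode_camera_path(poses, d, rng)
out = condition_video_tokens(frames, cam)
```

In a real model the conditioned tokens would then pass through the transformer; only the shape bookkeeping is shown here.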
Multi-log grasping using reinforcement learning and virtual visual servoing
Wallin, Erik, Wiberg, Viktor, Servin, Martin
We explore multi-log grasping using reinforcement learning and virtual visual servoing for automated forwarding in a simulated environment. Automation of forest processes is a major challenge, and many techniques regarding robot control pose different challenges due to the unstructured and harsh outdoor environment. Grasping multiple logs involves various problems of dynamics and path planning, where understanding the interaction between the grapple, logs, terrain, and obstacles requires visual information. To address these challenges, we separate image segmentation from crane control and utilise a virtual camera to provide an image stream from reconstructed 3D data. We use Cartesian control to simplify domain transfer to real-world applications. Since log piles are static, visual servoing using a 3D reconstruction of the pile and its surroundings is equivalent to using real camera data until the point of grasping. This relaxes the limits on computational resources and time for the challenge of image segmentation and allows for collecting data in situations where the log piles are not occluded. The disadvantage is the lack of information during grasping. We demonstrate that this problem is manageable and present an agent that is 95% successful in picking one or several logs from challenging piles of 2–5 logs.
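The virtual-camera idea can be sketched as rendering a depth image from the static 3D reconstruction with a pinhole model: since the pile does not move, any viewpoint can be synthesised offline. Everything below (the intrinsics, the nearest-depth rasterisation loop) is an assumed minimal setup, not the authors' implementation:

```python
import numpy as np

def render_virtual_depth(points, K, cam_pose, hw):
    # Project world-frame points from a static 3D reconstruction into a
    # virtual pinhole camera, keeping the nearest depth per pixel.
    # points: (N, 3) world points; K: 3x3 intrinsics; cam_pose: 4x4
    # world-to-camera transform; hw: (height, width) of the output image.
    h, w = hw
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam_pts = (cam_pose @ pts_h.T).T[:, :3]
    cam_pts = cam_pts[cam_pts[:, 2] > 0]          # keep points in front
    proj = (K @ cam_pts.T).T
    uv = (proj[:, :2] / proj[:, 2:3]).astype(int)  # pixel coordinates
    depth = np.full((h, w), np.inf)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    for (u, v), z in zip(uv[valid], cam_pts[valid, 2]):
        depth[v, u] = min(depth[v, u], z)          # z-buffer: nearest wins
    return depth
```

Such a rendered stream could then feed the same segmentation pipeline a real camera would, up to the point of grasping.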
HVDetFusion: A Simple and Robust Camera-Radar Fusion Framework
Lei, Kai, Chen, Zhan, Jia, Shuman, Zhang, Xiaoteng
In the field of autonomous driving, 3D object detection is a very important perception module. Although the current SOTA algorithms combine camera and lidar sensors, the high price of lidar means the current mainstream deployment schemes use pure camera sensors or camera+radar sensors. In this study, we propose a new detection algorithm called HVDetFusion, a multi-modal detection algorithm that not only supports pure camera data as input for detection but can also fuse radar data with camera data. The camera stream does not depend on the radar input, addressing a downside of previous methods. In the pure camera stream, we modify the framework of Bevdet4D for better perception and more efficient inference, and this stream provides the full 3D detection output. Further, to incorporate the benefits of radar signals, we use prior information about object positions to filter false positives from the raw radar data, then use the position and radial-velocity information recorded by the radar sensors to supplement and fuse the BEV features generated from the camera data; the effect improves further during fusion training. Finally, HVDetFusion achieves a new state-of-the-art 67.4% NDS on the challenging nuScenes test set among all camera-radar 3D object detectors. The code is available at https://github.com/HVXLab/HVDetFusion
- Transportation > Ground > Road (0.34)
- Information Technology (0.34)
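A toy version of the position-prior filtering step might look as follows; the distance threshold and the BEV-plane nearest-centre test are assumptions for illustration, not HVDetFusion's actual criterion:

```python
import numpy as np

def filter_radar_by_camera_priors(radar_pts, cam_centers, max_dist=2.0):
    # Drop radar returns (x, y, radial_velocity) that fall far from every
    # camera-predicted object centre in the BEV plane -- a simple stand-in
    # for position-prior false-positive filtering.
    # radar_pts: (N, 3); cam_centers: (M, 2) BEV object centres.
    if len(cam_centers) == 0:
        return radar_pts[:0]
    d = np.linalg.norm(radar_pts[:, None, :2] - cam_centers[None, :, :], axis=-1)
    keep = d.min(axis=1) <= max_dist     # near at least one camera detection
    return radar_pts[keep]
```

The surviving returns, with their radial velocities, would then be fused into the camera-derived BEV features.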
Safety Assessment for Autonomous Systems' Perception Capabilities
Autonomous Systems (AS) are increasingly proposed, or used, in Safety Critical (SC) applications. Many such systems make use of sophisticated sensor suites and processing to provide scene understanding, which informs the AS's decision-making. The sensor processing typically makes use of Machine Learning (ML) and has to work in challenging environments; further, the ML algorithms have known limitations, e.g., the possibility of false negatives or false positives in object classification. The well-established safety-analysis methods developed for conventional SC systems are not well matched to AS, ML, or the sensing systems used by AS. This paper proposes an adaptation of well-established safety-analysis methods to address the specifics of perception systems for AS, including environmental effects and the potential failure modes of ML, and provides a rationale for choosing particular sets of guidewords, or prompts, for safety analysis. It goes on to show how the results of the analysis can be used to inform the design and verification of the AS, and illustrates the new method with a partial analysis of a road vehicle. Illustrations in the paper are primarily based on optical sensing; however, the paper discusses the applicability of the method to other sensing modalities and its role in a wider safety process addressing the overall capabilities of AS.
- Europe > United Kingdom (0.14)
- North America > United States > District of Columbia > Washington (0.04)
- North America > United States > Arizona > Maricopa County > Tempe (0.04)
- (6 more...)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Transportation > Infrastructure & Services (0.93)
- (4 more...)
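The guideword-driven analysis can be pictured as crossing each perception function with each prompt to generate the rows of a deviation worksheet for analysts to complete; the guideword and function lists below are hypothetical examples, not the sets the paper argues for:

```python
# Hypothetical example sets -- the paper derives its own guidewords.
GUIDEWORDS = ["omission", "commission", "late", "early", "wrong value"]
FUNCTIONS = ["object detection", "object classification", "distance estimation"]

def deviation_worksheet(functions, guidewords):
    # Cross each perception function with each guideword/prompt to produce
    # worksheet rows; effects and mitigations are filled in by analysts.
    return [{"function": f, "guideword": g, "effect": None, "mitigation": None}
            for f in functions for g in guidewords]

rows = deviation_worksheet(FUNCTIONS, GUIDEWORDS)
```

Each completed row then feeds design and verification activities, e.g., a "late object detection" row motivating a latency requirement on the sensing pipeline.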
Powering Data-Driven Autonomy at Scale with Camera Data
At Woven Planet Level 5, we're using machine learning (ML) to build an autonomous driving system that improves as it observes more human driving. This is based on our Autonomy 2.0 approach, which leverages machine learning and data to solve the complex task of driving safely. This is unlike traditional systems, where engineers hand-design rules for every possible driving event. Last year, we took a critical step in delivering on Autonomy 2.0 by using an ML model to power our motion planner, the core decision-making module of our self-driving system. We saw the ML Planner's performance improve as we trained it on more human driving data.
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.51)
- Information Technology > Robotics & Automation (0.36)
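The learn-from-demonstrations idea behind such an ML planner can be sketched as behaviour cloning: fit a policy to logged human state-action pairs. The example below is a deliberately tiny linear least-squares policy on synthetic data, nothing like the production ML Planner:

```python
import numpy as np

# Synthetic "human driving" log: state features -> action, generated from a
# known linear policy so the fit can be checked exactly.
rng = np.random.default_rng(1)
true_w = np.array([0.5, -0.2, 0.1])          # hypothetical expert policy
states = rng.standard_normal((200, 3))       # logged state features
actions = states @ true_w                    # expert actions (noise-free here)

# Behaviour cloning: regress actions on states over the demonstration log.
w, *_ = np.linalg.lstsq(states, actions, rcond=None)
```

With more demonstrations (and a far richer model class), the cloned policy improves, which is the sense in which the planner "improves as it observes more human driving".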
Arm unveils image processor for driver assistance and automation
Arm has introduced a design for an automotive image signal processor to enhance driver assistance and automation technologies. The Arm Mali-C78AE image signal processor (ISP) is part of Arm's AE line of safety-capable intellectual property suitable for advanced driver assistance systems (ADAS) and human vision applications. It's the first product announcement since Nvidia called off the $80 billion acquisition of Arm last week. The first licensee for the tech is Intel's Mobileye, which is licensing the Mali-C78AE for its next-generation EyeQ technology. ADAS tech has grown from a premium vehicle feature to a capability consumers now expect as standard in new vehicles, as the systems have helped with driver safety.
- Automobiles & Trucks > Manufacturer (0.82)
- Transportation > Ground > Road (0.51)
Tesla activates facial recognition cameras in its cars for safety
Tesla CEO Elon Musk explained the reason for the Model 3's mysterious cockpit camera. Tesla has activated facial recognition technology in its Model 3 and Model Y vehicles to ensure that the driver is in the seat and watching the road when using the cars' advanced driver assist systems. The two vehicles have always been produced with cameras near the rearview mirror, which Elon Musk initially said would be used to monitor for vandalism once their promised fully autonomous driving capability launches and they are deployed as ride-hailing vehicles. Tesla had been criticized for not using the technology after several reports surfaced of drivers riding in the front passenger and rear seats with no one behind the wheel. Tesla's Autopilot and Full Self-Driving features also require the driver to keep a hand on the wheel, but this check is easily spoofed by attaching a weight to it.
- Transportation > Passenger (0.77)
- Transportation > Ground > Road (0.57)
The Incredible Ways Shell Uses Artificial Intelligence To Help Transform The Oil And Gas Giant
Royal Dutch Shell is heavily investing in research and development of artificial intelligence (AI), which it hopes will provide solutions to some of its most pressing challenges. From meeting the demands of a transitioning energy market, urgently in need of cleaner and more efficient power, to improving safety on the forecourts of its service stations, AI is at the top of the agenda. I have been working with Shell over the past months to help create a data strategy, which gave me a thorough insight into Shell's AI priorities and initiatives. Current initiatives include deploying reinforcement learning in its exploration and drilling program, to reduce the cost of extracting the gas that still drives a significant proportion of its revenues. Elsewhere across its global business, Shell is rolling out AI at its public electric car charging stations, to manage the shifting demand for power throughout a day.
- Transportation > Ground > Road (1.00)
- Energy > Oil & Gas (1.00)